Real-time Monocular Object SLAM
We present a real-time object-based SLAM system that leverages the largest
object database to date. Our approach comprises two main components: 1) a
monocular SLAM algorithm that exploits object rigidity constraints to improve
the map and find its real scale, and 2) a novel object recognition algorithm
based on bags of binary words, which provides live detections with a database
of 500 3D objects. The two components work together and benefit each other: the
SLAM algorithm accumulates information from the observations of the objects,
anchors object features to special map landmarks and sets constraints on the
optimization. At the same time, objects partially or fully located within the
map are used as a prior to guide the recognition algorithm, achieving higher
recall. We evaluate our proposal on five real environments, showing improvements
in map accuracy and in efficiency with respect to other state-of-the-art
techniques.
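The "bags of binary words" idea that the abstract names can be sketched in miniature: quantise binary feature descriptors against a vocabulary by Hamming distance, then compare normalised word histograms. This is a toy illustration only, not the paper's system; the vocabulary size, descriptor length, and L1 similarity score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A vocabulary of K binary "words" (e.g. 256-bit ORB-like descriptors).
K, BITS = 32, 256
vocab = rng.integers(0, 2, size=(K, BITS), dtype=np.uint8)

def quantize(descriptors, vocab):
    """Assign each binary descriptor to the nearest vocabulary word (Hamming)."""
    # XOR followed by a bit count gives the Hamming distance to every word.
    d = (descriptors[:, None, :] ^ vocab[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)

def bow_vector(descriptors, vocab):
    """Normalised histogram of word occurrences: the image's BoW vector."""
    words = quantize(descriptors, vocab)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Two synthetic "images": the second reuses 80% of the first's descriptors.
img_a = rng.integers(0, 2, size=(100, BITS), dtype=np.uint8)
img_b = np.vstack([img_a[:80],
                   rng.integers(0, 2, size=(20, BITS), dtype=np.uint8)])

va, vb = bow_vector(img_a, vocab), bow_vector(img_b, vocab)
similarity = 1.0 - 0.5 * np.abs(va - vb).sum()  # L1 score in [0, 1]
```

Because 80% of the descriptors are shared, the L1 similarity stays well above that of two unrelated images, which is what makes the BoW vector useful for live recognition against a large database.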
Real-time monocular SLAM: Why filter?
Abstract—While the most accurate solution to off-line structure from motion (SFM) problems is undoubtedly to extract as much correspondence information as possible and perform global optimisation, sequential methods suitable for live video streams must approximate this to fit within fixed computational bounds. Two quite different approaches to real-time SFM — also called monocular SLAM (Simultaneous Localisation and Mapping) — have proven successful, but they sparsify the problem in different ways. Filtering methods marginalise out past poses and summarise the information gained over time with a probability distribution. Keyframe methods retain the optimisation approach of global bundle adjustment, but computationally must select only a small number of past frames to process. In this paper we perform the first rigorous analysis of the relative advantages of filtering and sparse optimisation for sequential monocular SLAM. A series of experiments, in simulation as well as using a real-image SLAM system, was performed by means of covariance propagation and Monte Carlo methods, and comparisons were made using a combined cost/accuracy measure. With some well-discussed reservations, we conclude that while filtering may have a niche in systems with low processing resources, in most modern applications keyframe optimisation gives the most accuracy per unit of computing time.
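The marginalisation step that distinguishes filtering methods can be illustrated on a toy Gaussian: dropping a past pose from the joint information (inverse covariance) matrix amounts to taking a Schur complement. This is a generic sketch with made-up dimensions and matrices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint Gaussian over [old pose (2D), landmark (2D)] in information form.
A = rng.standard_normal((4, 4))
Lam = A @ A.T + 4 * np.eye(4)        # symmetric positive-definite information matrix
eta = rng.standard_normal(4)         # information vector

# Partition: x = past pose (to marginalise out), y = landmark (to keep).
Lxx, Lxy = Lam[:2, :2], Lam[:2, 2:]
Lyx, Lyy = Lam[2:, :2], Lam[2:, 2:]

# Marginal information on the landmark alone: the Schur complement of Lxx.
Lyy_marg = Lyy - Lyx @ np.linalg.inv(Lxx) @ Lxy
eta_marg = eta[2:] - Lyx @ np.linalg.inv(Lxx) @ eta[:2]  # marginal information vector

# Sanity check: this equals inverting the landmark block of the joint covariance.
Sigma = np.linalg.inv(Lam)
assert np.allclose(np.linalg.inv(Sigma[2:, 2:]), Lyy_marg)
```

The Schur complement densifies the remaining blocks, which is exactly the fill-in that makes filtering expensive as the map grows and motivates the keyframe alternative discussed in the abstract.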
Coherent delocalization: Views of entanglement in different scenarios
The concept of entanglement was originally introduced to explain correlations
existing between two spatially separated systems, that cannot be described
using classical ideas. Interestingly, in recent years, it has been shown that
similar correlations can be observed when considering different degrees of
freedom of a single system, even a classical one. Surprisingly, it has also
been suggested that entanglement might be playing a relevant role in certain
biological processes, such as the functioning of pigment-proteins that
constitute light-harvesting complexes of photosynthetic bacteria. The aim of
this work is to show that the presence of entanglement in all of these
different scenarios should not be unexpected, once it is realized that the very
same mathematical structure can describe all of them. We show this by
considering three different, realistic cases in which the only condition for
entanglement to exist is that a single excitation is coherently delocalized
between the different subsystems that compose the system of interest.
Getting a Piece of the Pie: Federal Grants to Faith-based Social Service Organizations
This research examines the grantmaking of the White House Office of Faith-Based and Community Initiatives, established under the Bush Administration in 2001.
- …